Results 1 - 4 of 4
1.
IEEE Trans Med Imaging ; PP, 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38393846

ABSTRACT

Synthesis of unavailable imaging modalities from available ones can generate modality-specific complementary information and enable multi-modality-based medical image diagnosis or treatment. Existing generative methods for medical image synthesis are usually based on cross-modal translation between acquired and missing modalities. These methods are typically dedicated to a specific missing modality and perform synthesis in one shot, so they can neither flexibly handle a varying number of missing modalities nor effectively construct the mapping across modalities. To address these issues, in this paper we propose a unified Multi-modal Modality-masked Diffusion Network (M2DN), tackling multi-modal synthesis from the perspective of "progressive whole-modality inpainting" instead of "cross-modal translation". Specifically, M2DN treats the missing modalities as random noise and processes all modalities as a whole in each reverse diffusion step. The proposed joint synthesis scheme performs synthesis for the missing modalities and self-reconstruction for the available ones, which not only enables synthesis under arbitrary missing scenarios, but also facilitates the construction of a common latent space and enhances the model's representation ability. Besides, we introduce a modality-mask scheme that explicitly encodes the availability status of each incoming modality in a binary mask, which serves as a condition for the diffusion model to further enhance the synthesis performance of M2DN under arbitrary missing scenarios. We carry out experiments on two public brain MRI datasets for synthesis and downstream segmentation tasks. Experimental results demonstrate that our M2DN significantly outperforms state-of-the-art models and generalizes well to arbitrary missing modalities.
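The modality-mask idea above can be sketched in a few lines: missing modalities are replaced by Gaussian noise while available ones pass through unchanged, and a binary mask records which is which. This is a minimal stdlib-only illustration; the function name and list-based representation are assumptions, not taken from the paper's code.

```python
import random

def mask_and_noise(modalities, available):
    """Toy version of M2DN's input preparation.

    modalities: list of equal-length flat intensity lists, one per modality.
    available:  1/0 flags per modality. Missing modalities are replaced by
                Gaussian noise, mirroring how M2DN treats missing channels
                as random noise; the float mask would condition the model.
    """
    noised = []
    for voxels, ok in zip(modalities, available):
        if ok:
            noised.append(list(voxels))          # self-reconstruction target
        else:
            noised.append([random.gauss(0.0, 1.0) for _ in voxels])  # to synthesize
    return noised, [float(ok) for ok in available]
```

At each reverse diffusion step the real model would denoise the full stack jointly, so available and missing modalities share one latent space.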

2.
IEEE Trans Med Imaging ; 43(2): 723-733, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37756173

ABSTRACT

Coronary artery segmentation is critical for diagnosing coronary artery disease but challenging due to the artery's tortuous course, numerous small branches, and inter-subject variations. Most existing studies ignore important anatomical information and vascular topologies, leading to segmentation performance that usually cannot satisfy clinical demands. To deal with these challenges, in this paper we propose an anatomy- and topology-preserving two-stage framework for coronary artery segmentation. The proposed framework consists of an anatomical dependency encoding (ADE) module and a hierarchical topology learning (HTL) module, for coarse and fine segmentation respectively. Specifically, the ADE module segments the four heart chambers and the aorta, yielding five distance field maps that encode the distance between the chamber surfaces and the coarsely segmented coronary artery. Meanwhile, ADE also performs coronary artery detection to crop the region of interest and alleviate foreground-background imbalance. The follow-up HTL module performs fine segmentation by exploiting three hierarchical vascular topologies, i.e., key points, centerlines, and neighbor connectivity, in a multi-task learning scheme. In addition, we adopt a bottom-up attention interaction (BAI) module to integrate the feature representations extracted across hierarchical topologies. Extensive experiments on public and in-house datasets show that the proposed framework achieves state-of-the-art performance for coronary artery segmentation.
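A distance field map of the kind the ADE module produces simply assigns each voxel its distance to the nearest surface point of a segmented structure. The brute-force 2-D sketch below illustrates the concept only; real pipelines would use a fast distance transform on 3-D masks, and the function name is illustrative.

```python
import math

def distance_field(height, width, surface):
    """Euclidean distance from every pixel to the nearest surface pixel.

    surface: iterable of (row, col) points on a structure's boundary,
    e.g. a heart chamber surface. Returns a height x width nested list:
    a toy stand-in for one of ADE's five chamber/aorta distance fields.
    """
    return [[min(math.dist((y, x), p) for p in surface) for x in range(width)]
            for y in range(height)]
```

Stacking one such field per anatomical structure gives the coarse stage extra channels that tell the fine stage where the artery lies relative to the heart's anatomy.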


Subject(s)
Coronary Artery Disease , Deep Learning , Humans , Heart/diagnostic imaging , Aorta , Image Processing, Computer-Assisted
3.
IEEE Trans Med Imaging ; 43(1): 558-569, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37695966

ABSTRACT

Current deep learning-based reconstruction models for accelerated multi-coil magnetic resonance imaging (MRI) mainly focus on subsampled k-space data of a single modality using convolutional neural networks (CNNs). Although dual-domain information and data consistency constraints are commonly adopted in fast MRI reconstruction, the performance of existing models is still limited mainly by three factors: inaccurate estimation of coil sensitivity, inadequate utilization of structural priors, and the inductive bias of CNNs. To tackle these challenges, we propose an unrolling-based joint Cross-Attention Network, dubbed jCAN, which uses deep guidance from already acquired intra-subject data. In particular, to improve coil sensitivity estimation, we simultaneously optimize the latent MR image and the sensitivity map (SM). Besides, we introduce a gating layer and a Gaussian layer into SM estimation to alleviate the "defocus" and "over-coupling" effects and further ameliorate the SM estimation. To enhance the representation ability of the proposed model, we deploy a Vision Transformer (ViT) in the image domain and a CNN in the k-space domain. Moreover, we exploit a pre-acquired intra-subject scan as a reference modality to guide the reconstruction of the subsampled target modality via a self- and cross-attention scheme. Experimental results on public knee and in-house brain datasets demonstrate that the proposed jCAN outperforms state-of-the-art methods by a large margin in terms of SSIM and PSNR across different acceleration factors and sampling masks. Our code is publicly available at https://github.com/sunkg/jCAN.
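The data consistency constraint mentioned above is the workhorse of unrolled reconstruction: wherever k-space was actually measured, the network's prediction is overwritten by the measurement. A minimal 1-D sketch, with illustrative names not taken from the jCAN repository:

```python
def data_consistency(k_pred, k_meas, mask):
    """Hard data-consistency step on a flattened k-space line.

    k_pred: network-predicted k-space values.
    k_meas: measured k-space values (meaningful only where mask is 1).
    mask:   1 where a location was sampled, 0 where it was skipped.
    Sampled locations keep the measurement; unsampled ones keep the
    network's estimate. Unrolled models apply this after every cascade.
    """
    return [km if m else kp for kp, km, m in zip(k_pred, k_meas, mask)]
```

In a multi-coil setting like jCAN's, the same replacement happens per coil after weighting the image estimate by the (jointly optimized) sensitivity maps.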


Subject(s)
Brain , Knee Joint , Brain/diagnostic imaging , Magnetic Resonance Imaging , Neural Networks, Computer , Normal Distribution , Image Processing, Computer-Assisted
4.
Med Phys ; 48(10): 6453-6463, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34053089

ABSTRACT

PURPOSE: Deformable image registration is a fundamental task in medical imaging. Due to the large computational complexity of deformable registration of volumetric images, conventional iterative methods usually face a tradeoff between registration accuracy and computation time in practice. To boost the performance of deformable registration in both accuracy and runtime, we propose a fast unsupervised convolutional neural network for deformable image registration. METHODS: The proposed registration model, FDRN, possesses a compact encoder-decoder network architecture that takes a pair of fixed and moving images as input and outputs a three-dimensional displacement vector field (DVF) describing the offsets between corresponding voxels in the fixed and moving images. To use memory resources efficiently and enlarge the model capacity, we adopt additive forwarding instead of channel concatenation and deepen the network in each encoder and decoder stage. To improve learning efficiency, we leverage skip connections within the encoder and decoder stages to enable residual learning, and employ an auxiliary loss at the lowest-resolution bottom layer to provide deep supervision. In particular, the low-resolution auxiliary loss is weighted by an exponentially decayed parameter during the training phase; in conjunction with the main loss on the high-resolution grid, this achieves a coarse-to-fine learning strategy. Finally, we introduce a proposed multi-label segmentation loss (SL) to improve the network's Dice score when a segmentation prior is available. Compared to an SL based on the average Dice score, the proposed SL requires no additional memory in the training phase and improves registration accuracy efficiently. RESULTS: We evaluated FDRN on multiple brain MRI datasets in terms of registration accuracy, model generalizability, and model analysis.
Experimental results demonstrate that FDRN outperforms the state-of-the-art registration method VoxelMorph by 1.46% in Dice score on LPBA40. Beyond LPBA40, FDRN obtains the best Dice and NCC among all investigated methods on the unseen MRI datasets CUMC12, MGH10, ABIDE, and ADNI by a large margin. CONCLUSIONS: The proposed FDRN outperforms existing state-of-the-art registration methods for brain MR images by resorting to its compact autoencoder structure and efficient learning. Additionally, FDRN is a generalized framework for image registration that is not confined to a particular type of medical image or anatomy.
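The exponentially decayed weighting of the auxiliary loss can be written as a one-liner: early in training the low-resolution loss dominates alongside the main loss, and its influence fades as epochs progress. The function name, initial weight, and decay rate below are illustrative assumptions, not values reported by the paper.

```python
import math

def total_loss(main_loss, aux_loss, epoch, w0=1.0, decay=0.1):
    """Coarse-to-fine loss schedule in the style of FDRN's deep supervision.

    main_loss: similarity loss on the full-resolution DVF grid.
    aux_loss:  auxiliary loss at the lowest-resolution bottom layer.
    The auxiliary term's weight w0 * exp(-decay * epoch) shrinks over
    training, shifting emphasis from coarse alignment to fine detail.
    """
    return main_loss + w0 * math.exp(-decay * epoch) * aux_loss
```

A segmentation loss term would be added on top of this whenever label maps are available for the training pair.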


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Magnetic Resonance Imaging , Neuroimaging , Tomography, X-Ray Computed